feat(chat_models): Enable tool calling for ChatMLX #318
base: main
Conversation
@diego-coder thank you, and apologies for the delay. Working through the queue.
Honestly @mdrxy, you work too hard, I see you! I see what you do around here. I will update my others as well since I am on. I just updated #209, which you just checked on with a merge, so conflicts there should now be resolved. I will send like, one more message with the complete package so I'm not repeatedly pinging you. I'm guessing your inbox is a four alarm 🔥. Get to them whenever you can.
Hi @mdrxy, as promised, here is the final summary of my open PRs. I've just finished running the full test suite, and they are all up-to-date with main. Ready for review (all CI should pass).
Special case here
Thanks again for all your help and for working through this backlog with me. No rush.
Fixes #308
This PR adds end-to-end tool-calling support to ChatMLX, allowing it to correctly use tools with compatible models running locally on Apple Silicon.
Context and History
This is a new, comprehensive approach to resolving #308 (assigned by @mdrxy). This work supersedes the previous attempt in PR #133, which was confirmed to be non-functional and is now stale with unresolved conflicts. This new implementation is fully functional, follows LangChain Core conventions, and is covered by extensive testing.
Technical Implementation
The solution is broken down into three main parts:
Prompt Formatting: The _to_chat_prompt method has been updated to accept a tools argument, which is then passed to the tokenizer's apply_chat_template. This ensures the model receives tool definitions in the correct format, enabling it to generate tool-call responses.
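As a rough sketch of this pattern (helper names here are hypothetical, not the PR's actual code): LangChain messages are mapped to the role/content dicts a Hugging Face tokenizer expects, and the tool schemas are forwarded via the `tools` argument that recent `transformers` chat templates support:

```python
# Hypothetical sketch of the prompt-formatting step, assuming a Hugging
# Face-style tokenizer whose chat template understands a `tools` argument.

def messages_to_chat_dicts(messages):
    """Map (message_type, content) pairs to the {'role': ..., 'content': ...}
    dicts that apply_chat_template expects."""
    role_map = {"human": "user", "ai": "assistant", "system": "system", "tool": "tool"}
    return [{"role": role_map[t], "content": c} for t, c in messages]

def to_chat_prompt(tokenizer, messages, tools=None):
    # Forwarding `tools=` lets the chat template render the tool definitions
    # into the prompt, so the model can emit tool-call responses.
    return tokenizer.apply_chat_template(
        messages_to_chat_dicts(messages),
        tools=tools,
        tokenize=False,
        add_generation_prompt=True,
    )
```

The key point is that the tool schemas travel through `apply_chat_template` rather than being concatenated into the prompt by hand, so each model's own template decides how tools are presented.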
Response Parsing: The _to_chat_result method has been completely overhauled with a dual-strategy parsing approach:
It first checks for structured tool call data in generation_info, leveraging a model's native tool-calling capabilities when available.
If no native tool calls are found, it falls back to a ReAct-style text parser (Action: / Action Input:) to extract tool calls from the raw text, ensuring compatibility with a wide range of models.
It correctly uses langchain_core helpers to create ToolCall and InvalidToolCall objects.
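The ReAct-style fallback described above can be sketched roughly as follows (illustrative names, not the PR's actual code); a real integration would build `ToolCall`/`InvalidToolCall` objects via the `langchain_core` helpers instead of plain dicts:

```python
import json
import re

# Fallback parser: recover a tool call from ReAct-style text
# ("Action: <name>" / "Action Input: <JSON args>") when no structured
# tool calls are present in generation_info.
_ACTION_RE = re.compile(
    r"Action:\s*(?P<name>[\w.-]+)\s*Action Input:\s*(?P<args>\{.*?\})",
    re.DOTALL,
)

def parse_react_tool_call(text):
    """Return a ToolCall-like dict, or None if no tool call is found.

    Malformed JSON arguments are flagged rather than raised; the real
    integration would surface these as an InvalidToolCall.
    """
    match = _ACTION_RE.search(text)
    if match is None:
        return None
    name = match.group("name")
    try:
        args = json.loads(match.group("args"))
    except json.JSONDecodeError:
        return {"name": name, "args": match.group("args"), "error": "malformed JSON"}
    return {"name": name, "args": args}
```

Because the structured `generation_info` path is checked first, this text parser only runs for models whose templates lack native tool-call output, which is what keeps the feature usable across a wide range of local models.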